We study a Newton-like method for the minimization of an objective function that is the sum of a smooth convex function and an ℓ1 regularization term. This method, which is sometimes referred to in the literature as a proximal Newton method, computes a step by minimizing a piecewise quadratic model of the objective function. In order to make this approach efficient in practice, it is imperative to perform this inner minimization inexactly. In this paper, we give inexactness conditions that guarantee global convergence and that can be used to control the local rate of convergence of the iteration. Our inexactness conditions are based on a semi-smooth function that represents a (continuous) measure of the optimality conditions of the problem, and that embodies the soft-thresholding iteration. We give careful consideration to the algorithm employed for the inner minimization, and report numerical results on two test sets originating in machine learning.
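As a rough illustration of the kind of optimality measure alluded to above, the sketch below computes the residual of a single soft-thresholding (ISTA) step with unit step size for the problem min f(x) + lam*||x||_1; this residual is a continuous function of x that vanishes exactly at stationary points. The names `grad_f` and `lam` are hypothetical placeholders, and the unit step size is an assumption for illustration, not the paper's specific construction.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1: shrinks each entry of v toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def optimality_measure(x, grad_f, lam):
    # Residual of one soft-thresholding step with unit step size:
    # x is stationary for f(x) + lam*||x||_1 iff this residual is zero,
    # so its norm serves as a continuous measure of optimality.
    return np.linalg.norm(x - soft_threshold(x - grad_f(x), lam))
```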